Two-View Matching with View Synthesis Revisited
Wide-baseline matching focussing on problems with extreme viewpoint change is
considered. We introduce the use of view synthesis with affine-covariant
detectors to solve such problems and show that matching with the Hessian-Affine
or MSER detectors outperforms the state-of-the-art ASIFT.
To minimise the loss of speed caused by view synthesis, we propose the
Matching On Demand with view Synthesis algorithm (MODS) that uses progressively
more synthesized images and more (time-consuming) detectors until reliable
estimation of geometry is possible. We show experimentally that the MODS
algorithm solves problems beyond the state-of-the-art and yet is comparable in
speed to standard wide-baseline matchers on simpler problems.
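The on-demand strategy described above can be sketched as a simple control
loop. This is only an illustrative sketch, not the authors' implementation:
the step and verification callables are hypothetical placeholders for the
paper's synthesis/detection configurations and geometric verification.

```python
def mods_match(img1, img2, steps, verify, min_inliers=15):
    """Sketch of the MODS control loop: run progressively costlier
    matching steps (more synthesized views, slower detectors) and stop
    as soon as geometric verification yields enough inliers."""
    tentative = []
    for step in steps:                  # cheapest configuration first
        tentative += step(img1, img2)   # detect, describe, and match on
                                        # (possibly synthesized) views
        inliers = verify(tentative)     # e.g. a RANSAC epipolar check
        if len(inliers) >= min_inliers:
            return inliers              # reliable geometry: stop early
    return []                           # all steps exhausted
```

Easy problems terminate after the first cheap step, which is why the
on-demand scheme stays comparable in speed to standard wide-baseline
matchers on simpler problems.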
Minor contributions include an improved method for tentative correspondence
selection, applicable both with and without view synthesis, and a view
synthesis setup that greatly improves MSER robustness to blur and scale
change while increasing its running time by only 10%.
Comment: 25 pages, 14 figures
WxBS: Wide Baseline Stereo Generalizations
We have presented a new problem -- the wide multiple baseline stereo (WxBS)
-- which considers matching of images that simultaneously differ in more than
one image acquisition factor such as viewpoint, illumination, sensor type or
where object appearance changes significantly, e.g. over time. A new dataset
with the ground truth for evaluation of matching algorithms has been introduced
and will be made public.
We have extensively tested a large set of popular and recent detectors and
descriptors and show that the combination of RootSIFT and HalfRootSIFT as
descriptors with MSER and Hessian-Affine detectors works best for many
different nuisance factors. We show that simple adaptive thresholding
improves the Hessian-Affine, DoG, MSER (and possibly other) detectors and
allows them to be used on infrared and low-contrast images.
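RootSIFT itself is a simple, well-known transform (due to Arandjelovic and
Zisserman): L1-normalize the SIFT descriptor and take the elementwise square
root, so that Euclidean distance on the result corresponds to the Hellinger
kernel on the original histograms. A minimal sketch; HalfRootSIFT applies
the same map to the half-SIFT descriptor variant:

```python
import numpy as np

def rootsift(desc, eps=1e-7):
    """Map SIFT descriptors to RootSIFT: L1-normalize, then take the
    elementwise square root.  Works on a single descriptor or a batch
    (last axis holds the 128 descriptor bins)."""
    desc = np.asarray(desc, dtype=np.float64)
    desc = desc / (np.abs(desc).sum(axis=-1, keepdims=True) + eps)
    return np.sqrt(desc)
```

The output is unit-length in L2 (up to eps), so standard nearest-neighbor
SIFT matching machinery applies unchanged.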
A novel matching algorithm for addressing the WxBS problem has been
introduced. We have shown experimentally that the WxBS-M matcher dominates
the state-of-the-art methods on both the new and existing datasets.
Comment: Descriptor and detector evaluation expanded
HarrisZ+: Harris Corner Selection for Next-Gen Image Matching Pipelines
Due to its role in many computer vision tasks, image matching has been
actively investigated by researchers, which has led to better and more
discriminant feature descriptors and to more robust matching strategies,
also thanks to the advent of deep learning and the increased computational
power of modern hardware. Despite these achievements, the keypoint
extraction process at the base of the image matching pipeline has not seen
equivalent progress. This paper presents HarrisZ+, an upgrade to the
HarrisZ corner detector, optimized to synergically take advantage of the
recent improvements in the other steps of the image matching pipeline.
HarrisZ+ does not only consist of a tuning of the setup parameters, but
introduces further refinements to the selection criteria delineated by
HarrisZ, thus providing more, yet discriminative, keypoints that are better
distributed on the image and have higher localization accuracy. The image
matching pipeline including HarrisZ+, together with the other modern
components, obtained state-of-the-art results among classic image matching
pipelines on different recent matching benchmarks. These results are quite
close to those obtained by the more recent fully deep end-to-end trainable
approaches, and show that there is still a proper margin of improvement to
be gained by research in classic image matching methods.
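For reference, the classic Harris cornerness map that this family of
detectors starts from can be sketched in a few lines of NumPy; HarrisZ and
HarrisZ+ then build their z-score-based selection and further refinements
on top of such a response. This sketch uses a simple 3x3 box window where a
Gaussian window is standard, purely to stay short and dependency-free:

```python
import numpy as np

def harris_response(img, k=0.04):
    """Classic Harris cornerness: det(M) - k * trace(M)^2, where M is
    the locally averaged structure tensor of the image gradients."""
    gy, gx = np.gradient(img.astype(np.float64))

    def smooth(a):
        # 3x3 box average (a Gaussian is the standard choice)
        out = np.zeros_like(a)
        p = np.pad(a, 1, mode='edge')
        for i in range(3):
            for j in range(3):
                out += p[i:i + a.shape[0], j:j + a.shape[1]] / 9.0
        return out

    ixx, iyy, ixy = smooth(gx * gx), smooth(gy * gy), smooth(gx * gy)
    det = ixx * iyy - ixy * ixy
    tr = ixx + iyy
    return det - k * tr * tr
```

Edges yield a rank-deficient structure tensor and hence a non-positive
response; only true corners, where both eigenvalues of M are large, score
highly.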
DeblurGAN: Blind Motion Deblurring Using Conditional Adversarial Networks
We present DeblurGAN, an end-to-end learned method for motion deblurring.
The learning is based on a conditional GAN and the content loss. DeblurGAN
achieves state-of-the-art performance in both the structural similarity
measure and visual appearance. The quality of the deblurring model is also
evaluated in a novel way on a real-world problem -- object detection on
(de-)blurred images.
The method is 5 times faster than the closest competitor -- DeepDeblur. We also
introduce a novel method for generating synthetic motion blurred images from
sharp ones, allowing realistic dataset augmentation.
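The training objective combines the two terms mentioned above. A minimal
sketch of the generator loss, assuming a WGAN-style critic output and a
perceptual content term computed on feature maps; the array inputs and the
weight value here are illustrative stand-ins, not the released training
code:

```python
import numpy as np

def deblurgan_generator_loss(critic_fake, feat_sharp, feat_restored,
                             lam=100.0):
    """Generator objective sketch: a WGAN-style adversarial term plus a
    perceptual content term (MSE between feature maps of the sharp and
    restored images), weighted by lam."""
    adv = -np.mean(critic_fake)                        # fool the critic
    content = np.mean((feat_sharp - feat_restored) ** 2)
    return adv + lam * content
```

The heavy weight on the content term keeps the restored image close to the
sharp target, while the adversarial term pushes it toward the manifold of
realistic sharp images.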
The model, code and the dataset are available at
https://github.com/KupynOrest/DeblurGAN
Comment: CVPR 2018 camera-ready